Sequential Monte Carlo with Highly Informative Observations
We propose sequential Monte Carlo (SMC) methods for sampling the posterior
distribution of state-space models under highly informative observation
regimes, a situation in which standard SMC methods can perform poorly. A
special case is simulating bridges between given initial and final values. The
basic idea is to introduce a schedule of intermediate weighting and resampling
times between observation times, which guide particles towards the final state.
This can always be done for continuous-time models, and may be done for
discrete-time models under sparse observation regimes; our main focus is on
continuous-time diffusion processes. The methods are broadly applicable in that
they support multivariate models with partial observation, do not require
simulation of the backward transition (which is often unavailable), and, where
possible, avoid pointwise evaluation of the forward transition. When simulating
bridges, the last cannot be avoided entirely without concessions, and we
suggest an epsilon-ball approach (reminiscent of Approximate Bayesian
Computation) as a workaround. Compared to the bootstrap particle filter, the
new methods deliver substantially reduced mean squared error in normalising
constant estimates, even after accounting for execution time. The methods are
demonstrated for state estimation with two toy examples, and for parameter
estimation (within a particle marginal Metropolis--Hastings sampler) with three
applied examples in econometrics, epidemiology and marine biogeochemistry.
Comment: 25 pages, 11 figures
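The idea in this first abstract can be sketched for the simplest possible case, a Brownian-motion bridge, where the density of reaching the final state is available in closed form. The function and parameter names, the every-step resampling schedule, and the particular choice of intermediate weights are illustrative assumptions for this sketch, not the paper's algorithm; the terminal epsilon-ball weight mirrors the ABC-style workaround the abstract mentions.

```python
import numpy as np

def norm_logpdf(y, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (y - mean) ** 2 / var

def bridge_smc(x0, xT, T, n_steps, n_particles, sigma=1.0, eps=0.2, seed=0):
    """Sketch: SMC sampling of a Brownian bridge from x0 at time 0 to xT at T.

    Particles are propagated forward unconditionally; intermediate weights
    (exact for Brownian motion) guide them towards the final state, and a
    terminal epsilon-ball weight enforces the endpoint approximately.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_particles, float(x0))
    for k in range(1, n_steps + 1):
        x_old = x
        # Forward propagation: Euler step of dX = sigma dW (no conditioning).
        x = x_old + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
        if k < n_steps:
            # Intermediate weight: ratio of the densities of still reaching
            # xT in the remaining time, after versus before the step.
            logw = (norm_logpdf(xT, x, sigma**2 * (T - k * dt))
                    - norm_logpdf(xT, x_old, sigma**2 * (T - (k - 1) * dt)))
        else:
            # Terminal epsilon-ball weight: keep only particles landing
            # within eps of the required endpoint (ABC-style).
            logw = np.where(np.abs(x - xT) < eps, 0.0, -np.inf)
        w = np.exp(logw - logw.max())
        # Multinomial resampling at every intermediate time (a simple schedule).
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
    return x

particles = bridge_smc(x0=0.0, xT=2.0, T=1.0, n_steps=20, n_particles=500)
```

All surviving particles end within eps of the target endpoint; without the intermediate weights, almost no unconditioned forward path would do so.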
Path storage in the particle filter
This article considers the problem of storing the paths generated by a
particle filter and more generally by a sequential Monte Carlo algorithm. It
provides a theoretical result bounding the expected memory cost by
T + C N log N, where T is the time horizon, N is the number of particles and C
is a constant, as well as an efficient algorithm to realise this. The
theoretical result and the algorithm are illustrated with numerical
experiments.
Comment: 9 pages, 5 figures. To appear in Statistics and Computing
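The storage problem arises because resampled trajectories share ancestors, so they form a tree rather than N independent paths. A minimal sketch of tree-structured storage with pruning of extinct branches follows; the class and method names are illustrative, and the paper's contribution is the memory bound and an efficient realisation, not this naive version.

```python
import random

class PathStore:
    """Sketch: store particle-filter paths as an ancestry tree.

    Each node holds a state, a parent index and an offspring count.
    Nodes whose entire progeny has died out after resampling are
    reclaimed through a free list, so live storage tracks the tree
    of surviving trajectories rather than growing as N * T.
    """
    def __init__(self):
        self.state, self.parent, self.noff, self.free = [], [], [], []

    def _alloc(self, state, parent):
        if self.free:
            i = self.free.pop()
            self.state[i], self.parent[i], self.noff[i] = state, parent, 0
        else:
            i = len(self.state)
            self.state.append(state)
            self.parent.append(parent)
            self.noff.append(0)
        if parent is not None:
            self.noff[parent] += 1
        return i

    def _prune(self, i):
        # Walk towards the root, reclaiming nodes with no live descendants.
        while i is not None and self.noff[i] == 0:
            self.free.append(i)
            p = self.parent[i]
            if p is not None:
                self.noff[p] -= 1
            i = p

    def step(self, states, ancestors, leaves):
        """Insert one generation; ancestors[j] indexes the previous leaves."""
        new = [self._alloc(s, leaves[a] if leaves else None)
               for s, a in zip(states, ancestors)]
        for i in leaves:
            self._prune(i)
        return new

    def path(self, leaf):
        """Recover the full trajectory ending at a given leaf."""
        out = []
        while leaf is not None:
            out.append(self.state[leaf])
            leaf = self.parent[leaf]
        return out[::-1]

rng = random.Random(1)
store, n = PathStore(), 4
leaves = store.step([0.0] * n, range(n), [])
for _ in range(3):
    ancestors = [rng.randrange(n) for _ in range(n)]
    leaves = store.step([rng.gauss(0, 1) for _ in range(n)], ancestors, leaves)
live = len(store.state) - len(store.free)
```

After three resampling steps, each leaf's path has length four (the initial state plus three steps), while pruning keeps the live node count at or below the naive N * T total.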
Parallel resampling in the particle filter
Modern parallel computing devices, such as the graphics processing unit
(GPU), have gained significant traction in scientific and statistical
computing. They are particularly well-suited to data-parallel algorithms such
as the particle filter, or more generally Sequential Monte Carlo (SMC), which
are increasingly used in statistical inference. SMC methods carry a set of
weighted particles through repeated propagation, weighting and resampling
steps. The propagation and weighting steps are straightforward to parallelise,
as they require only independent operations on each particle. The resampling
step is more difficult, as standard schemes require a collective operation,
such as a sum, across particle weights. Focusing on this resampling step, we
analyse two alternative schemes that do not involve a collective operation
(Metropolis and rejection resamplers), and compare them to standard schemes
(multinomial, stratified and systematic resamplers). We find that, in certain
circumstances, the alternative resamplers can perform significantly faster on a
GPU, and to a lesser extent on a CPU, than the standard approaches. Moreover,
in single precision, the standard approaches are numerically biased for upwards
of hundreds of thousands of particles, while the alternatives are not. This is
particularly important given greater single- than double-precision throughput
on modern devices, and the consequent temptation to use single precision with a
greater number of particles. Finally, we provide auxiliary functions useful for
implementation, such as for the permutation of ancestry vectors to enable
in-place propagation.
Comment: 21 pages, 6 figures
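The contrast between the two kinds of resampler can be sketched as follows: the systematic resampler needs a cumulative sum over all weights (a collective operation), while a Metropolis resampler draws each ancestor with a short independent Markov chain over the weights, touching only pairs of particles. The chain length B is a tuning parameter; this sketch is an illustration of the idea, not the paper's tuned implementation.

```python
import numpy as np

def systematic_resample(w, rng):
    """Standard systematic resampler: requires a collective cumulative sum."""
    n = len(w)
    c = np.cumsum(w)
    c /= c[-1]  # last entry becomes exactly 1.0
    u = (rng.random() + np.arange(n)) / n  # one uniform, stratified offsets
    return np.searchsorted(c, u)

def metropolis_resample(w, rng, B=32):
    """Sketch of a Metropolis resampler: no collective operation.

    Each ancestor index is the end state of a B-step Metropolis chain
    whose stationary distribution is proportional to the weights, so
    only pairwise weight comparisons are needed.
    """
    n = len(w)
    anc = np.arange(n)
    for _ in range(B):
        prop = rng.integers(0, n, size=n)   # uniform proposals
        u = rng.random(n)
        # Accept with probability min(1, w[prop] / w[anc]).
        accept = u * w[anc] < w[prop]
        anc = np.where(accept, prop, anc)
    return anc

rng = np.random.default_rng(0)
w = rng.uniform(0.5, 1.5, size=5000)
anc_sys = systematic_resample(w, rng)
anc_met = metropolis_resample(w, rng)
```

For finite B the Metropolis scheme is only approximately unbiased, and for very peaked weights B must grow for the chains to mix; in exchange, each chain step is a purely local operation, which is what makes the scheme attractive on a GPU.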